Real-time remaining life prediction method of Web software system based on self-attention-long short-term memory network
DANG Weichao, LI Tao, BAI Shangwang, GAO Gaimei, LIU Chunxia
Journal of Computer Applications    2021, 41 (8): 2346-2351.   DOI: 10.11772/j.issn.1001-9081.2020091486
To predict the Remaining Useful Life (RUL) of a Web software system accurately and in real time, while taking into account both the time-series characteristics of the system's health-status performance indicators and the interdependence between the indicators, a real-time remaining-life prediction method for Web software systems based on a Self-Attention-Long Short-Term Memory (Self-Attention-LSTM) network was proposed. Firstly, an accelerated life test platform was built to collect performance indicator data reflecting the aging trend of the Web software system. Then, according to the time-series characteristics of the performance indicator data, a Long Short-Term Memory (LSTM) recurrent neural network was constructed to extract hidden-layer features of the performance indicators, and a self-attention mechanism was used to model the dependencies between these features. Finally, a real-time RUL prediction value of the Web system was obtained. On three test sets, the proposed model was compared with the Back Propagation (BP) network and the conventional Recurrent Neural Network (RNN). Experimental results show that the Mean Absolute Error (MAE) of the model is 16.92% lower than that of LSTM on average, and its relative accuracy (Accuracy) is 5.53% higher than that of LSTM on average, which verifies the effectiveness of the Self-Attention-LSTM RUL model. The proposed method can thus provide technical support for optimizing software rejuvenation decisions for Web systems.
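The architecture described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the layer sizes, the indicator count, and the pooling of the last time step are all assumptions.

```python
# Minimal sketch: an LSTM extracts hidden features from the indicator time
# series, and self-attention models the dependencies between them before a
# scalar RUL regression head.
import torch
import torch.nn as nn

class SelfAttentionLSTM(nn.Module):
    def __init__(self, n_indicators=8, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(n_indicators, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # scalar RUL estimate

    def forward(self, x):                     # x: (batch, time, indicators)
        h, _ = self.lstm(x)                   # hidden features per time step
        a, _ = self.attn(h, h, h)             # dependencies between features
        return self.head(a[:, -1])            # RUL from the last time step

model = SelfAttentionLSTM()
print(model(torch.randn(2, 30, 8)).shape)     # two windows of 30 steps -> (2, 1)
```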
Blockchain storage expansion model based on Chinese remainder theorem
QING Xinyi, CHEN Yuling, ZHOU Zhengqiang, TU Yuanchao, LI Tao
Journal of Computer Applications    2021, 41 (7): 1977-1982.   DOI: 10.11772/j.issn.1001-9081.2020081256
Blockchain stores transaction data in the form of a distributed ledger, and its nodes each hold a copy of the current data by storing the hash chain. Because of the particularity of the blockchain structure, the number of blocks grows over time and the storage pressure on nodes grows with it, so storage scalability has become one of the bottlenecks in blockchain development. To address this problem, a blockchain storage expansion model based on the Chinese Remainder Theorem (CRT) was proposed. In the model, the blockchain was divided into high-security blocks and low-security blocks, which were stored under different strategies: low-security blocks were stored network-wide (every node keeps the data), while high-security blocks were sliced by a CRT-based partitioning algorithm and stored in a distributed fashion. In addition, the error detection and correction of the Redundant Residue Number System (RRNS) was used to restore data and resist malicious node attacks, improving the stability and integrity of the data. Experimental results and security analysis show that the proposed model not only provides security and fault tolerance and ensures data integrity, but also effectively reduces the storage consumption of nodes and increases the storage scalability of the blockchain system.
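The CRT slicing idea can be illustrated in a few lines. This is a hedged toy, not the paper's algorithm: the moduli, the integer encoding of a block, and the subset-reconstruction check are illustrative assumptions.

```python
# A block, encoded as an integer, is stored as residues under pairwise-
# coprime moduli; any subset of moduli whose product exceeds the block value
# can reconstruct it, and extra moduli provide RRNS-style redundancy.
from math import prod

MODULI = [257, 263, 269, 271, 277]           # toy pairwise-coprime moduli

def split(block: int):
    return [block % m for m in MODULI]       # one residue share per node

def reconstruct(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)         # CRT recombination
    return x % M

shares = split(123456789)
assert reconstruct(shares, MODULI) == 123456789
assert reconstruct(shares[:4], MODULI[:4]) == 123456789   # survives a lost share
```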
Relay selection strategy for cache-aided full-duplex simultaneous wireless information and power transfer system
SHI Anni, LI Taoshen, WANG Zhe, HE Lu
Journal of Computer Applications    2021, 41 (6): 1539-1545.   DOI: 10.11772/j.issn.1001-9081.2020121930
To improve the performance of Simultaneous Wireless Information and Power Transfer (SWIPT) systems, a new cache-aided full-duplex relay cooperative system model was constructed, in which free Energy Access Points (EAPs) serve as an extra energy supplement for the relay nodes. For the system throughput optimization problem, a new SWIPT relay selection strategy based on power allocation cooperation was proposed. Firstly, a problem model was established under constraints such as communication quality of service and source node transmit power. Secondly, the original nonlinear mixed-integer programming problem was transformed into a pair of coupled optimization problems. Finally, the Karush-Kuhn-Tucker (KKT) conditions were applied, with the help of a Lagrange function, to solve the inner optimization problem, yielding closed-form solutions for the power allocation factor and the relay transmit power; the outer problem was then solved on this basis to select the best relay for cooperative communication. Simulation results show that free EAPs and relay caching are feasible and effective, and that the proposed system clearly outperforms traditional relay cooperative communication systems in terms of throughput gain.
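The two-level structure of the strategy, an inner power-allocation solve per relay and an outer argmax over relays, can be sketched as follows. The paper's closed-form KKT solution is not reproduced here; a hypothetical throughput() expression and a 1-D grid search stand in for it.

```python
import math

def throughput(relay, rho):
    # rho: power-splitting factor at the relay, 0 < rho < 1 (stand-in formula)
    return min(math.log2(1 + (1 - rho) * relay["snr_sr"]),
               math.log2(1 + rho * relay["snr_rd"]))

def best_relay(relays, grid=1000):
    best = None
    for relay in relays:
        # inner problem: optimal allocation factor for this relay
        rho, rate = max(((i / grid, throughput(relay, i / grid))
                         for i in range(1, grid)), key=lambda t: t[1])
        if best is None or rate > best[2]:
            best = (relay["id"], rho, rate)   # (relay, rho*, achievable rate)
    return best

relays = [{"id": 1, "snr_sr": 12.0, "snr_rd": 7.5},
          {"id": 2, "snr_sr": 9.0, "snr_rd": 10.0}]
print(best_relay(relays))
```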
Time lag based temporal dependency episodes discovery
GU Peiyue, LIU Zheng, LI Yun, LI Tao
Journal of Computer Applications    2019, 39 (2): 421-428.   DOI: 10.11772/j.issn.1001-9081.2018061366
In traditional frequent episode discovery, a predefined time window is usually used to mine simple association dependencies between events, which cannot effectively handle interleaved temporal correlations between events. To address this, the concept of time-lag episode discovery was proposed, and, building on frequent episode discovery, a time-lag episode discovery algorithm based on an Adjacent Event Matching set (AEM) was presented. Firstly, a probabilistic statistical model with time lag was introduced to perform event sequence matching and handle possibly interleaved associations without a predefined time window. Then the discovery of time lags was formulated as an optimization problem, solved iteratively to obtain the time-interval distribution between time-lag episodes. Finally, hypothesis testing was used to distinguish serial from parallel time-lag episodes. Experimental results show that, compared with the Iterative Closest Event (ICE) algorithm, the state of the art in time-lag mining, the Kullback-Leibler (KL) divergence between the true and discovered distributions under AEM is 0.056 on average, a decrease of 20.68%. By measuring the possibility of multiple event matches through a probabilistic model of time lag, the AEM algorithm obtains a one-to-many adjacent event matching set, which models the actual situation more effectively than the one-to-one matching set in ICE.
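As a small illustration of the evaluation criterion reported above, the following sketch computes the KL divergence between a true and a discovered time-lag distribution over a shared binning; the two distributions are made-up placeholders.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    # discrete KL divergence D(P || Q), with eps guarding empty bins
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

true_lag  = [0.10, 0.20, 0.40, 0.20, 0.10]   # P(lag falls in each bin)
found_lag = [0.12, 0.18, 0.38, 0.22, 0.10]   # distribution discovered by mining
print(kl_divergence(true_lag, found_lag))    # small value = close match
```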
Quality evaluation model of network operation and maintenance based on correlation analysis
WU Muyang, LIU Zheng, WANG Yang, LI Yun, LI Tao
Journal of Computer Applications    2018, 38 (9): 2535-2542.   DOI: 10.11772/j.issn.1001-9081.2018020412
Traditional network operation and maintenance evaluation methods have two problems. First, they depend too heavily on domain experts' experience for indicator selection and weight assignment, making accurate and comprehensive assessment difficult. Second, network operation and maintenance quality involves data from multiple vendors and devices in different formats and types, and a surge of users brings huge volumes of data. To solve these problems, an indicator selection method based on correlation was proposed, focusing on the indicator selection step of the evaluation process. By comparing the strength of correlation between the indicators' data series, the original indicators are grouped into clusters, and a key indicator is selected from each cluster to construct a key indicator system. Data processing methods and weight determination methods requiring no human participation were also incorporated into the network operation and maintenance quality evaluation model. In the experiments, the indicators selected by the proposed method cover 72.2% of the manually chosen indicators, with an information overlap rate 31% lower than that of the manual indicators. The proposed method effectively reduces human involvement and achieves higher accuracy in alarm prediction.
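The correlation-based selection step might look like the following hedged sketch, in which indicators are clustered by pairwise correlation strength and one (arbitrary) representative per cluster stands in for the key-indicator choice; the data, threshold, and linkage method are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
base = rng.normal(size=200)
df = pd.DataFrame({
    "cpu_util": base + rng.normal(0, 0.1, 200),  # strongly correlated pair
    "sys_load": base + rng.normal(0, 0.1, 200),
    "pkt_loss": rng.normal(size=200),            # unrelated indicator
})

dist = 1 - df.corr().abs()                       # strong correlation -> small distance
labels = fcluster(linkage(squareform(dist.values, checks=False), "average"),
                  t=0.5, criterion="distance")
key = {c: name for name, c in zip(df.columns, labels)}   # one per cluster
print(sorted(key.values()))                      # -> ['pkt_loss', 'sys_load']
```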
Adaptive bi-lp-l2-norm based blind super-resolution reconstruction for single blurred image
LI Tao, HE Xiaohai, TENG Qizhi, WU Xiaoqiang
Journal of Computer Applications    2017, 37 (8): 2313-2318.   DOI: 10.11772/j.issn.1001-9081.2017.08.2313
An adaptive bi-lp-l2-norm based blind super-resolution reconstruction method was proposed to improve the quality of a low-resolution blurred image; it comprises an independent blur-kernel estimation sub-process and a non-blind super-resolution reconstruction sub-process. In the blur-kernel estimation sub-process, bi-lp-l2-norm regularization was imposed on both the sharp image and the blur kernel; moreover, by introducing threshold segmentation of the image gradients, the lp-norm and l2-norm constraints on the sharp image were combined adaptively. With the estimated blur kernel, a non-blind super-resolution reconstruction method based on non-locally centralized sparse representation was used to reconstruct the final high-resolution image. In simulation experiments, compared with the bi-l0-l2-norm based method, the proposed method achieved an average Peak Signal-to-Noise Ratio (PSNR) gain 0.16 dB higher, an average Structural Similarity Index Measure (SSIM) gain 0.0045 higher, and an average Sum of Squared Differences (SSD) ratio 0.13 lower. The experimental results demonstrate the superior performance of the proposed method in terms of kernel estimation accuracy and reconstructed image quality.
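For orientation, one plausible form of the kernel-estimation objective, assuming the bi-norm formulation common in this line of work (the paper's exact terms and weights may differ), is:

```latex
\min_{u,k}\; \|u \otimes k - b\|_2^2
  + \alpha \|\nabla u\|_p^p + \beta \|\nabla u\|_2^2
  + \gamma \|k\|_p^p + \eta \|k\|_2^2
```

where b is the blurred low-resolution observation, u the latent sharp image, k the blur kernel, and thresholding of the image gradients adaptively switches each gradient between the lp and l2 terms.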
Tracking mechanism of service provenance based on graph
LUO Bo, LI Tao, WANG Jie
Journal of Computer Applications    2016, 36 (6): 1650-1653.   DOI: 10.11772/j.issn.1001-9081.2016.06.1650
Service provenance data stored in relational or document databases cannot support effective service tracking operations, while graph database storage alone cannot execute rapid aggregation operations. To solve these problems, a new graph-based service provenance tracking mechanism was proposed. On the basis of graph database storage of service provenance, the storage structure of service provenance in the graph database was defined, and an aggregation operation for this storage structure was provided. Then three service provenance tracking models were designed, based respectively on static weight, mixed operation, and real-time task. The experimental results show that the proposed mechanism can meet the query requirements of different types of service provenance data, such as aggregation and tracking operations, while reducing the time consumed by service tracking and improving the tracking efficiency of service provenance.
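A toy sketch of the idea, not the paper's storage structure: provenance is kept as a directed graph whose nodes are services and data items, so tracking reduces to a reachability walk. The names and relations below are illustrative.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("result.csv", "svcB", relation="generatedBy")   # output of service B
g.add_edge("svcB", "input.csv", relation="used")           # B consumed input.csv
g.add_edge("input.csv", "svcA", relation="generatedBy")    # input came from A

print(nx.descendants(g, "result.csv"))   # full provenance trace of result.csv
```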
Anomaly detection model based on danger theory of distributed service
LI Jinmin, LI Tao, XU Kai
Journal of Computer Applications    2015, 35 (9): 2519-2521.   DOI: 10.11772/j.issn.1001-9081.2015.09.2519
Under the distributed environment, the massive behavior data produced by a large number of services leads to inefficient anomaly detection, and the dynamic composition of services introduces uncertainty between services. To address these problems, a new distributed service anomaly detection model based on danger theory was proposed. Firstly, inspired by the way artificial immune systems recognize abnormalities in biology, differentiation was used to describe the variation of the massive service behavior data, and a characteristic triad was constructed to detect the abnormal source. Then, guided by the idea of the cloud model, the uncertainty among services was resolved by constructing a status cloud of the services and computing the degree of membership between services, from which the danger zone was calculated. Finally, simulation experiments on a student course-selection scenario were carried out. The results show that the model not only detects abnormal services dynamically, but also describes the dependencies between services accurately and improves anomaly detection efficiency, verifying the validity and effectiveness of the model.
Optimized construction scheme of seeded-key matrices of collision-free combined public key
LI Tao, ZHANG Haiying, YANG Jun, YU Dan
Journal of Computer Applications    2015, 35 (1): 83-87.   DOI: 10.11772/j.issn.1001-9081.2015.01.0083
Concerning the key collision problem and the storage space of seeded-key matrices in Combined Public Key (CPK), a coefficient remapping method was proposed and rules for selecting the elements of the seeded matrices were designed. Firstly, in the identification mapping phase, binary bit streams were produced and divided into a coefficient sequence and a row sequence. The coefficient sequence was then remapped according to the remapping rules, which prevented zero coefficients; this remapping reduced the storage space of the matrices. Secondly, in the seeded-key matrix generation step, rules built on the coefficient remapping were specified for choosing the elements of the seeded-key matrices, ensuring that the generated keys were unique. Finally, the matrix elements were selected according to the row sequence and the increasing column sequence, and the public and private keys were generated from the coefficient sequence and the selected elements. Theoretical analysis suggests that the proposed scheme optimizes matrix storage and solves the key collision problem.
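The combination step can be sketched abstractly. In this hedged toy, integers modulo a prime stand in for elliptic-curve points, and the row/coefficient derivation from the identity hash is an assumption; it only illustrates the nonzero coefficient remapping and matrix-element selection described above.

```python
import hashlib

P = 2**127 - 1                                        # toy modulus
SEED = [[hash((i, j)) % P for j in range(32)] for i in range(8)]  # seeded matrix

def derive_private(identity: str) -> int:
    h = hashlib.sha256(identity.encode()).digest()
    key = 0
    for col in range(32):                             # increasing column sequence
        row = h[col] % 8                              # row sequence from the mapping
        coeff = h[(col + 1) % 32] % 255 + 1           # remapped: never zero
        key = (key + coeff * SEED[row][col]) % P
    return key

print(derive_private("alice@example.com"))
```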
Parallel implementation of OpenVX and 3D rendering on polymorphic graphics processing unit
YAN Youmei, LI Tao, WANG Pengbo, HAN Jungang, LI Xuedan, YAO Jing, QIAO Hong
Journal of Computer Applications    2015, 35 (1): 53-57.   DOI: 10.11772/j.issn.1001-9081.2015.01.0053
Since image processing, computer vision, and 3D rendering are all massively parallel workloads, the programmability and flexible parallel processing modes of the Polymorphic Array Architecture for Graphics (PAAG) platform were fully exploited: a parallelism design method combining operation-level parallelism with data-level parallelism was used to implement the OpenVX kernel functions and 3D rendering pipelines. The experimental results indicate that, in the parallel implementation of the OpenVX kernel functions and graphics rendering, the Multiple Instruction Multiple Data (MIMD) parallel processing of PAAG achieves a linear speedup with a slope of 1, which is more efficient than the sublinear speedup (slope less than 1) obtained with the Single Instruction Multiple Data (SIMD) parallel processing of traditional Graphics Processing Units (GPUs).
Hardware/software partitioning based on greedy algorithm and simulated annealing algorithm
ZHANG Liang, XU Chengcheng, TIAN Zheng, LI Tao
Journal of Computer Applications    2013, 33 (07): 1898-1902.   DOI: 10.11772/j.issn.1001-9081.2013.07.1898
Hardware/Software (HW/SW) partitioning is one of the crucial steps in the co-design of embedded systems, and it has been proven to be an NP-hard problem. Considering that recent work suffers from slow convergence and poor solution quality, a HW/SW partitioning method based on a greedy algorithm and the Simulated Annealing (SA) algorithm was proposed. The method reduces the HW/SW partitioning problem to an extended 0-1 knapsack problem and uses the greedy algorithm to produce a rapid initial partition; it then divides the solution space sensibly, designs a new cost function, and uses the improved SA algorithm to search for the global optimum. Compared with existing improved algorithms, the experimental results show that the new algorithm is more effective and practical in terms of partitioning quality and running time, with improvements of 8% and 17% respectively.
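The two-stage idea lends itself to a compact sketch: a greedy density-ordered knapsack pass gives the initial partition, and simulated annealing over single-task flips refines it. The costs, speedups, area budget, and cooling schedule below are made-up demo values, not the paper's.

```python
import math, random

tasks = [{"speedup": s, "area": a} for s, a in
         [(5.0, 3), (3.0, 2), (8.0, 6), (2.0, 1), (6.0, 4)]]
AREA = 8                                     # hardware area budget

def gain(sol):                               # total speedup if feasible
    if sum(t["area"] for t, x in zip(tasks, sol) if x) > AREA:
        return -1.0                          # infeasible partition
    return sum(t["speedup"] for t, x in zip(tasks, sol) if x)

# greedy initial partition by speedup-per-area density
sol, used = [0] * len(tasks), 0
for i in sorted(range(len(tasks)),
                key=lambda i: tasks[i]["speedup"] / tasks[i]["area"],
                reverse=True):
    if used + tasks[i]["area"] <= AREA:
        sol[i], used = 1, used + tasks[i]["area"]

# simulated annealing refinement: flip one task's side at a time
random.seed(0)
temp, best = 10.0, sol[:]
while temp > 0.01:
    cand = sol[:]
    cand[random.randrange(len(cand))] ^= 1
    delta = gain(cand) - gain(sol)
    if delta > 0 or random.random() < math.exp(delta / temp):
        sol = cand
        if gain(sol) > gain(best):
            best = sol[:]
    temp *= 0.95                             # geometric cooling
print(best, gain(best))
```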
Research of modern geometry based on mass point method
LI Tao, ZOU Yu
Journal of Computer Applications    2012, 32 (11): 3057-3061.   DOI: 10.3724/SP.J.1087.2012.03057
Based on the mass point method, a new Mathematica prover was developed. With this prover, hundreds of modern geometry theorems were proved for the first time, with satisfactory proof readability. With its help, some new properties in modern geometry were found, and some existing research results in modern geometry were deepened.
Multivariate regression analytical method based on heuristic constructed variable under condition of incomplete data
ZHANG Xi-xiang, LI Tao-shen
Journal of Computer Applications    2012, 32 (08): 2202-2274.   DOI: 10.3724/SP.J.1087.2012.02202
Regression analysis is often used for filling in and predicting incomplete data, but it has a flaw when constructing the regression equation: the form of the independent variables is fixed and single. To solve this problem, an improved multivariate regression analysis method based on heuristically constructed variables was proposed. Firstly, optimized combination forms of the existing variables were found by means of a greedy algorithm; then the constructed variable yielding the best goodness of fit was chosen for the multivariate regression analysis. Results of estimating incomplete data on the mechanical strength of wheat stalks show that the proposed method is feasible and effective, and that it achieves a better goodness of fit when predicting incomplete data.
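A hedged sketch of the construction step: candidate variables are formed from the existing predictors, and the one that most improves the goodness of fit (R^2 here) is kept. The candidate set and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(1, 10, 200), rng.uniform(1, 10, 200)
y = 3 * x1 * x2 + rng.normal(0, 5, 200)          # true relation uses a product term

candidates = {"x1*x2": x1 * x2, "x1^2": x1**2, "log(x2)": np.log(x2)}
base = np.column_stack([x1, x2])

def r2_with(name):                               # fit with one constructed variable added
    X = np.column_stack([base, candidates[name]])
    return LinearRegression().fit(X, y).score(X, y)

best = max(candidates, key=r2_with)
print("constructed variable:", best)             # expected: x1*x2
```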
Assisted interest-based method for searching shared files in P2P network
LI Tao, CHEN Shi-ping
Journal of Computer Applications   
After studying the advantages and disadvantages of traditional search methods in structured and unstructured P2P networks, an assisted interest-based search method was proposed, which enhances search in unstructured P2P overlay networks by registering the interests of nodes in a structured P2P overlay network. Experimental results show that this method achieves good performance in terms of success rate and search latency.
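A toy sketch of the assisted idea, with a plain dictionary standing in for the structured (DHT-like) overlay in which node interests are registered: a query first asks interest-matching nodes before falling back to flooding. All names are illustrative.

```python
dht = {}                                     # interest keyword -> node ids

def register(node, interests):
    for i in interests:
        dht.setdefault(i, set()).add(node)   # registration in the structured overlay

def candidates(interest, flood_fallback):
    # interest-matching nodes first, otherwise the unstructured flooding set
    return sorted(dht.get(interest, set())) or flood_fallback

register("n1", ["music", "video"])
register("n2", ["video"])
print(candidates("music", flood_fallback=["n3", "n4"]))   # -> ['n1']
```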
Scalable key management scheme based on member discovery protocol for IP multicast
WEI Chu-yuan, LI Tao-shen, WANG Gao-cai
Journal of Computer Applications    2005, 25 (10): 2291-2293.  
On the basis of analyzing existing schemes, a key management scheme based on a member discovery protocol for IP multicast was proposed. Aiming at realistic Internet-like topologies, the member discovery protocol was embedded in the proposed protocol to output a member overlay tree. The multicast group consisting of all end users was divided into virtual subgroups by a group-constructing algorithm, and the Logical Key Hierarchy (LKH) protocol was adopted for key management within each subgroup. The group key was distributed to all subgroup security controllers by the group security controller, making this a two-level key management protocol. The improved protocol possesses better scalability than other schemes and better group rekeying performance, and can be applied to large multicast groups.
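The two-level distribution can be sketched as follows; this is a hedged toy in which a member list stands in for each subgroup's LKH tree and the encryption is symbolic.

```python
import os

class Subgroup:
    def __init__(self, members):
        self.members = set(members)
    def rekey(self, group_key):
        # the real scheme walks the LKH tree: O(log n) encryptions per change
        return {m: ("enc_for", m, group_key.hex()) for m in self.members}

group_key = os.urandom(16)                   # generated by the group security controller
for sg in [Subgroup({"a", "b"}), Subgroup({"c", "d"})]:
    print(sg.rekey(group_key))               # each subgroup controller distributes locally
```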